5 - Artificial Intelligence II [ID:9048]

Good morning, and sorry to be late. We were talking about probabilities: we're trying to build up the inferential machinery for talking about events in an uncertain world, where the agent considers a certain set of worlds as possible and tries to estimate their likelihood. We need inference because we can't assess all the probabilities directly, and even where we could, it would be too costly. So we want to measure or estimate certain probabilities and then derive other probabilities from those; that's really what we're doing. And we introduced probabilities as our belief state about the world. It's very important to keep in mind that probabilities are not what actually happens in the world: in the world, rain or sunshine is simply true or false. But if you don't know which, you can only estimate likelihoods, and that's what probabilities do. They model our beliefs and our lack of knowledge.

Okay, so the setup was just as in logic: we have a language in which we can express certain events, propositional logic here, and we have models, and in the models we have these probability functions, while in the language we have exact descriptions of events. We build events from random variables taking values, things like the event that Cavity is true, meaning you have a cavity, and the probability of that might be something like 0.2. We're only going to consider finite-domain random variables, and we'll very often use vector-like abbreviations that stand for the whole distribution of a variable at once.
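As a minimal sketch of finite-domain random variables and the vector-like notation, here is one way to write this down in Python; the variables Weather and Cavity and their numbers are the usual textbook illustrations, not values fixed in this lecture:

```python
# A finite-domain random variable is a name plus a distribution over
# its finitely many values. P(Weather) is the "vector-like"
# abbreviation for the whole table of values at once.
P_Weather = {"sunny": 0.6, "rain": 0.1, "cloudy": 0.29, "snow": 0.01}

# A Boolean random variable: P(Cavity = true) = 0.2.
P_Cavity = {True: 0.2, False: 0.8}

# Every distribution must sum to 1 over the variable's domain.
assert abs(sum(P_Weather.values()) - 1.0) < 1e-9
assert abs(sum(P_Cavity.values()) - 1.0) < 1e-9
```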

Okay, we looked at the language, and the language of events is essentially given as all Boolean formulae whose propositional variables, if you want, are atomic events: one random variable having one particular outcome. Having a language with conjunction is nice because it gives us the possibility of not having to consider tuples of events, tuples of outcomes. The other reason we're using propositional logic for our events is that it corresponds very, very nicely to the set-like semantics behind all of this, where an event denotes the set of worlds in which it holds, and plays nice with that. So that's something you want.
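To make the set-like semantics concrete, here is a small sketch; representing worlds as variable assignments and events as predicates over worlds is my choice of encoding, not notation from the lecture:

```python
from itertools import product

# A world (atomic event) is a full assignment of outcomes to the
# random variables; the event language is propositional logic over
# statements like Cavity = true.
variables = {"Cavity": [True, False], "Toothache": [True, False]}
worlds = [dict(zip(variables, vals)) for vals in product(*variables.values())]

# An event is a Boolean formula, i.e. a predicate on worlds; its
# set-like semantics is the set of worlds in which it holds.
def cavity_or_toothache(w):
    return w["Cavity"] or w["Toothache"]

extension = [w for w in worlds if cavity_or_toothache(w)]
print(len(extension), "of", len(worlds), "worlds satisfy the event")  # 3 of 4
```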

And of course we've defined the properties of these probability functions. One of them is that summing up over everything, all the atomic probabilities, gives you one. And you have this kind of locality: if you sum up over the atomic events that satisfy an event, you get the probability of the complex event. So we have this looking downwards towards the atoms, and a compositionality feature here as well.
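Here is a sketch of both properties on a small full joint distribution; the numbers are illustrative only, roughly the familiar dentist example:

```python
# Full joint distribution over (Cavity, Toothache): one probability
# per atomic event (world). Numbers are illustrative only.
joint = {
    (True, True): 0.12, (True, False): 0.08,
    (False, True): 0.08, (False, False): 0.72,
}

# Property 1: summing over all atomic events gives 1.
assert abs(sum(joint.values()) - 1.0) < 1e-9

# Property 2: the probability of a complex event is the sum over the
# atomic events that satisfy it.
def prob(event):
    return sum(p for world, p in joint.items() if event(world))

p_cavity_or_ache = prob(lambda w: w[0] or w[1])  # Cavity v Toothache
print(p_cavity_or_ache)  # 0.12 + 0.08 + 0.08 = 0.28
```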

Okay, we looked at the Kolmogorov axioms, which are just another way of expressing these properties, and then basically moved on to something more interesting: conditional probabilities. Given that we know B, what's the likelihood of A? That's relatively easy to define, P(A|B) = P(A and B) / P(B), but very, very powerful. It allows us to localise probabilities and, of course, to use knowledge. During inference, as an agent progresses, it senses things, and given that the sensors are reliable, or that we account for their unreliability, we have more and more knowledge, more and more that we want to put into this B. Typically B is what we know for sure, or assume, in a certain context, and as the agent lives on, that is going to grow.
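As a sketch, conditional probability can be computed directly from a full joint distribution; again, the numbers are just the illustrative dentist-style table from above:

```python
# Conditional probability from the full joint distribution:
# P(A | B) = P(A and B) / P(B), defined whenever P(B) > 0.
joint = {
    (True, True): 0.12, (True, False): 0.08,
    (False, True): 0.08, (False, False): 0.72,
}

def prob(event):
    return sum(p for world, p in joint.items() if event(world))

def cond(a, b):
    return prob(lambda w: a(w) and b(w)) / prob(b)

# P(Cavity | Toothache): conditioning localises us to the worlds
# where the evidence Toothache holds.
print(cond(lambda w: w[0], lambda w: w[1]))  # 0.12 / 0.20 = 0.6
```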

Okay, now we start doing inference, and the only real problem here is this: if we had all probabilities, if we had the full joint probability distribution, we could in theory do everything we want. In practice, these objects are just much too big. So we want to use the fact that, empirically, these high-dimensional tables of probabilities have identical values in many places. And they have these identical values in many, sometimes funny, places for reasons of how the world works. One of the most important reasons is independence. Whether you have cavities is independent of the weather. If you throw two dice, their outcomes are independent. What you do today is probably independent of what Susie Parker does in San Francisco; you don't even know her, I hope. You might not be independent of what Trump does: you hear something about him, and you need lots of beer in the evening because you're so frustrated. But very often we have independent events, and that leads to many identical values in the joint probability distribution, and we want to take advantage of those. And full joint probability distributions are not the right tool to do this, not the right tool to express that structure.
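A sketch of why independence pays off: if two variables are independent, the joint factors as a product, so we only need to store the small tables; the variables and numbers below are illustrative:

```python
from itertools import product

# If Cavity and Weather are independent, then
# P(Cavity, Weather) = P(Cavity) * P(Weather),
# so instead of storing a 2 x 4 = 8-entry joint table we store the
# 2-entry and 4-entry tables and recompute joint entries on demand.
P_Cavity = {True: 0.2, False: 0.8}
P_Weather = {"sunny": 0.6, "rain": 0.1, "cloudy": 0.29, "snow": 0.01}

joint = {(c, w): P_Cavity[c] * P_Weather[w]
         for c, w in product(P_Cavity, P_Weather)}

assert abs(sum(joint.values()) - 1.0) < 1e-9
print(joint[(True, "rain")])  # 0.2 * 0.1 = 0.02

# With n independent Boolean variables the joint has 2**n entries,
# while the factored representation needs only n numbers.
```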

Part of a video series
Access: Open Access
Duration: 01:21:18
Recorded: 2018-04-25
Uploaded: 2018-04-26 09:21:36
Language: en-US

This course covers the foundations of Artificial Intelligence (AI), in particular techniques for reasoning under uncertainty, machine learning, and natural language understanding.
The course builds on the Artificial Intelligence I lecture from the winter semester and continues it.

Learning objectives and competencies
Subject, learning, and methodological competence

  • Knowledge: Students become familiar with fundamental representation formalisms and algorithms of Artificial Intelligence.

  • Application: The concepts are applied to examples from the real world (exercises).

  • Analysis: By modelling intelligence in the machine, students learn to better assess human intelligence capabilities.
